
why do commercial cfd products provide single precision?



Old   February 25, 2022, 09:13
Default why do commercial cfd products provide single precision?
  #1
Senior Member
 
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8
aerosayan is on a distinguished road
One year ago I asked if devs here develop their solvers to support single precision.
Paolo said that he only uses double precision because single precision is way too imprecise.

But many commercial solvers seem to provide single, mixed, and double precision solvers. Why?

ANSYS documentation even states that single precision is enough for most cases, except for a few where high precision is required. Is that really the case? I think modern hardware has become more than powerful enough to fully support double precision. Even GPUs now support double precision codes.

Playing devil's advocate, I would say that their codes aren't written specifically for single or double precision.

And I think that's actually a good decision from a business perspective: it makes sense for ANSYS and other big companies to be able to use the highest precision available on current and future generations of hardware, 10, 20, 30, or 40 years from today.

It's not just about using future generation hardware. They can also benefit from higher precision in current tasks that genuinely need it. One example would be point-inside-or-outside-polygon tests in mesh generation codes for particularly problematic cases.

So, most likely their code is generic and they just compile with REAL defaulting to REAL(KIND=8) or REAL(KIND=WHATEVER_FUTURE_PRECISION_IS_AVAILABLE).
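To make that concrete, here is a minimal sketch of what such a precision-agnostic setup could look like (the module and macro names are my own invention, not any vendor's actual code): the whole solver declares everything against one working-precision kind, and a compile flag decides what that kind maps to.

Code:
! precision_mod.F90 -- hypothetical sketch, not any vendor's actual source
module precision_mod
   use iso_fortran_env, only: real32, real64
   implicit none
#ifdef USE_SINGLE_PRECISION
   integer, parameter :: wp = real32   ! single precision build
#else
   integer, parameter :: wp = real64   ! double precision build (default)
#endif
end module precision_mod

! every other file then simply declares:  real(wp) :: rho, u, v, w, p, ...

The SP and DP solvers would then just be two compiles of the same source, with and without -DUSE_SINGLE_PRECISION (assuming a preprocessor-aware build).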

But I still don't understand why they provide single precision solvers. What's the use of a single precision solver when users can just as easily use double precision?

Old   February 25, 2022, 11:16
Default
  #2
New Member
 
Continuum
Join Date: Aug 2009
Posts: 19
Rep Power: 17
Continuum is on a distinguished road
I believe this move is in support of legacy 32-bit applications. These are memory-limited to about 2.5 GB (with reserve for the OS), and a single precision compilation gives a larger mesh count for the given limit. Of course, compiled to 64-bit, these limits are removed for the most part. I have several legacy applications that live in the 32-bit realm and need new compilers.
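To put rough numbers on that (the 50 reals per cell below is only an assumed figure for illustration; real solvers vary a lot):

Code:
! back-of-the-envelope 32-bit memory budget -- illustrative numbers only
program mesh_budget
   implicit none
   integer, parameter :: nvars = 50          ! assumed reals stored per cell (solution + coefficients)
   real :: limit_bytes, cells_sp, cells_dp
   limit_bytes = 2.0e9                       ! roughly 2 GB usable in a 32-bit process
   cells_sp = limit_bytes / (nvars * 4.0)    ! 4 bytes per REAL*4
   cells_dp = limit_bytes / (nvars * 8.0)    ! 8 bytes per REAL*8
   print '(a,f6.1,a)', 'single precision: ~', cells_sp/1.0e6, ' million cells'
   print '(a,f6.1,a)', 'double precision: ~', cells_dp/1.0e6, ' million cells'
end program mesh_budget

So under the same address-space limit, single precision roughly doubles the affordable mesh size.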



Years back, I recall Flow3D gave you an SP and a DP option when running.


But as you ask, there is really no reason now (unless you are compiler-bound somehow).



Regards
aerosayan likes this.

Old   February 25, 2022, 12:05
Default
  #3
Super Moderator
 
 
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
flotus1 has a spectacular aura about
My counter-argument is this: why would you use a double precision solver, when you can get the answers you need with single precision?
SP runs faster, which means more design iterations per unit of time. And/or lower hardware costs to run an analysis. And/or lower software license costs. And/or lower electricity costs.
And for many applications, single precision floating point calculations are indeed more than enough accuracy.
Most RANS simulations I come across on a daily basis would not benefit from higher numerical precision. You won't get a solver to converge further than what FP32 can provide. Hence no need for FP64.
There are cases where the additional precision is required to get the simulation to run at all. And for one-off simulations, I also like to just use double precision. One thing less to worry about. But I don't think this is a strong enough argument to abandon the advantages of lower precision entirely.
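As a rough illustration of where that FP32 floor sits (machine epsilon is only a crude proxy, not a rigorous round-off analysis): a residual normalized by its initial value typically stalls at some modest multiple of it.

Code:
! machine epsilon for both kinds -- a crude proxy for the achievable
! relative residual floor, not a rigorous bound
program eps_demo
   use iso_fortran_env, only: real32, real64
   implicit none
   print *, 'FP32 epsilon:', epsilon(1.0_real32)   ! ~1.2e-7
   print *, 'FP64 epsilon:', epsilon(1.0_real64)   ! ~2.2e-16
end program eps_demo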

I would also like to point out that double precision is not a magic bullet. Speaking of your example "point-inside-or-outside-polygon tests", you either handle the edge-cases in a way that would also work in single precision, or you are stuck with the exact same problem while doing double precision calculations.
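To sketch what I mean by handling the edge cases (illustrative only; the tolerance scaling below is my own assumption, and truly robust predicates such as exact arithmetic do more than this): the tolerance logic is the same whether the reals are 4 or 8 bytes wide.

Code:
! tolerance-based side-of-edge test -- a sketch of edge-case handling that
! works the same way in single and double precision
module side_test
   implicit none
   integer, parameter :: wp = kind(1.0)   ! flip to kind(1.0d0); the logic is unchanged
contains
   pure integer function side_of_edge(ax, ay, bx, by, px, py) result(s)
      real(wp), intent(in) :: ax, ay, bx, by, px, py
      real(wp) :: cross, tol
      cross = (bx - ax)*(py - ay) - (by - ay)*(px - ax)
      ! scale the tolerance with operand magnitude and the epsilon of the kind in use
      tol = 8.0_wp * epsilon(1.0_wp) * &
            max(abs((bx - ax)*(py - ay)), abs((by - ay)*(px - ax)))
      if (cross > tol) then
         s = +1        ! clearly left of edge a->b
      else if (cross < -tol) then
         s = -1        ! clearly right of edge a->b
      else
         s = 0         ! ambiguous: handle explicitly, do not trust the raw sign
      end if
   end function side_of_edge
end module side_test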

Quote:
Even GPUs now support double precision codes.
My observation of the GPU market is quite different. There is a massive downward trend for FP64 compute.
For GPUs that are not specifically marketed for compute acceleration, the ratio between FP32 and FP64 performance is getting larger each generation. Nvidia is currently at a factor of 32:1.
GPUs that don't have their FP64 performance nerfed are getting pretty rare these days. Quadros from the Fermi generation basically all had a ratio of 2:1. In the following generations, this factor quickly increased for all but the highest-end models. And nowadays, I am not even sure if there is a current-gen "Quadro equivalent" graphics card with FP64 performance worth noting.
These days, development time and die space goes into AI-specific execution units.

Quote:
I think modern hardware has become more than powerful enough
From my personal point of view, computers will never be powerful enough. But maybe that's just me.
aerosayan likes this.

Old   February 25, 2022, 13:04
Default
  #4
Senior Member
 
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8
aerosayan is on a distinguished road
@flotus1 thanks for your answer.

So how should we approach developing our code? Specifically, how do we determine when REAL*4 would not be enough and we need REAL*8?

Obviously, one easy example of such a scenario would be when the MIN_ELEMENT_SIZE of the grid is so small that we need REAL*8.
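One concrete check along these lines (just a sketch; the domain size and element size below are placeholder values): compare the REAL*4 coordinate spacing at the far end of the domain against the smallest edge you need to resolve.

Code:
! is REAL*4 fine enough to represent the smallest cells? -- placeholder numbers
program coord_resolution
   use iso_fortran_env, only: real32
   implicit none
   real(real32) :: domain_size, min_element_size
   domain_size      = 100.0_real32     ! coordinates reach ~100 length units (assumed)
   min_element_size = 1.0e-5_real32    ! smallest edge to resolve (assumed)
   ! spacing() returns the gap between adjacent representable REAL*4 values near its argument
   print *, 'FP32 coordinate spacing at the domain edge:', spacing(domain_size)
   if (spacing(domain_size) > 0.01_real32 * min_element_size) then
      print *, 'REAL*4 coordinates too coarse -> use REAL*8, or re-center/non-dimensionalize'
   else
      print *, 'REAL*4 coordinates look adequate for this geometry'
   end if
end program coord_resolution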

But I'm not sure for which applications of our solvers we don't need REAL*8. I would like to know how one might do an analysis to determine that REAL*4 would be enough.

A few optimization strategies have worked for me that I don't mind sharing: things like non-dimensionalization, and storing the data as REAL*4 but using it as REAL*8, have been extremely useful (used only when higher precision is required and we can guarantee data isn't lost by storing it as REAL*4).
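For anyone curious, a minimal sketch of the store-as-REAL*4 / compute-as-REAL*8 idea (names are just for illustration):

Code:
! store in REAL*4, promote to REAL*8 only inside the arithmetic -- sketch
module mixed_precision_demo
   use iso_fortran_env, only: real32, real64
   implicit none
contains
   function dot_r4_in_r8(x, y) result(d)
      real(real32), intent(in) :: x(:), y(:)   ! stored in single precision (half the memory)
      real(real64) :: d
      integer :: i
      d = 0.0_real64
      do i = 1, size(x)
         ! promote each factor before multiplying, accumulate in double
         d = d + real(x(i), real64) * real(y(i), real64)
      end do
   end function dot_r4_in_r8
end module mixed_precision_demo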

Currently I want to use REAL*4 mainly to reduce the memory requirement of my grid generators by 50%. I have found some amazing optimization strategies for my quadtree grid generators, and they work! But the triangle grid generators seem to be brutally memory hungry.

Old   February 25, 2022, 16:06
Default
  #5
Senior Member
 
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,290
Rep Power: 34
arjun will become famous soon enough
Fluent offers it because it is a very old code, and STAR-CCM+ provides it because a good number of the people involved in developing it were Fluent developers.

In newer codes I do not see this trend of providing different precisions.

I personally prefer double precision because I have seen some cases that did not converge with single precision but converged with double precision.

I have even added long double support in Wildkatze, and one model uses it. I am developing this model with the idea that it shall be able to handle vacuum-like situations, so density shall be dropping to 1E-7 or below.
aerosayan likes this.

Old   February 25, 2022, 16:24
Default
  #6
Senior Member
 
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8
aerosayan is on a distinguished road
Quote:
Originally Posted by arjun View Post
Fluent offers it because it is a very old code, and STAR-CCM+ provides it because a good number of the people involved in developing it were Fluent developers.

In newer codes I do not see this trend of providing different precisions.

I personally prefer double precision because I have seen some cases that did not converge with single precision but converged with double precision.

I have even added long double support in Wildkatze, and one model uses it. I am developing this model with the idea that it shall be able to handle vacuum-like situations, so density shall be dropping to 1E-7 or below.
Thanks for your answer Arjun.

Yes, Fluent is an old code, and it's a safe bet to use double precision, but as flotus1 said, single precision is extremely useful if you know that the results will be valid even with single precision.

I can think of heat conduction/diffusion through metal as an example case where we don't need very high precision. So if the code can use single precision, we can save memory and time.

But I don't know which cases in fluid flow can be solved with single precision. Probably low-speed flows with RANS would be the best use case.

It's somewhat starting to make sense to me I guess.

Old   February 26, 2022, 06:03
Default
  #7
Senior Member
 
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,290
Rep Power: 34
arjun will become famous soon enough
Quote:
Originally Posted by aerosayan View Post
Thanks for your answer Arjun.

Yes, Fluent is an old code, and it's a safe bet to use double precision, but as flotus1 said, single precision is extremely useful if you know that the results will be valid even with single precision.


I also mostly agree with it. Theoretically there is no problem in accepting it. Practically it is alright more than 95% of the time, but ONCE in a while you end up in a situation where single precision won't cut it.

I will give two examples where it counted:

1. When I worked at CD-adapco, there was a case that came from a client that no one could converge for almost 2 months, until someone decided to run it in double precision. That wasted two months for the support engineer involved.

2. I had a case last year with half a million triangles. The complaint was that it took a long time to converge. I checked, and it turned out the AMG solver was barely converging even after using up all 30 cycles.
It was very strange that it behaved that way. It turned out that, due to precision, the sum of the off-diagonals was slightly bigger than the diagonal. In single precision the same problem would be much worse.

Theoretically one can say that this shall never happen, but in practice things do not go as the book says.

Of course I put a guard against it, and it now costs some time too.
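To illustrate the kind of guard I mean (a simplified sketch, not the actual solver code; nudging the diagonal is only one possible strategy):

Code:
! simplified sketch of a diagonal-dominance guard on a CSR matrix
subroutine guard_diagonal(n, row_ptr, diag_idx, val)
   use iso_fortran_env, only: real64
   implicit none
   integer, intent(in)         :: n
   integer, intent(in)         :: row_ptr(n+1), diag_idx(n)   ! diag_idx(i): position of row i's diagonal in val
   real(real64), intent(inout) :: val(*)
   integer      :: i, k
   real(real64) :: off_sum
   do i = 1, n
      off_sum = 0.0_real64
      do k = row_ptr(i), row_ptr(i+1) - 1
         if (k /= diag_idx(i)) off_sum = off_sum + abs(val(k))
      end do
      ! if round-off has cost the row its diagonal dominance, nudge the diagonal back up
      if (abs(val(diag_idx(i))) < off_sum) then
         val(diag_idx(i)) = sign(off_sum * (1.0_real64 + 1.0e-12_real64), val(diag_idx(i)))
      end if
   end do
end subroutine guard_diagonal

Scanning every row like this is exactly the kind of extra work that costs some time on each solve.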


Precision can matter sometimes. The whole point is that accuracy does not count if your simulation does not converge. Once converged, it's all okay, I guess.
aerosayan likes this.

Old   February 26, 2022, 07:47
Default
  #8
Super Moderator
 
 
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
flotus1 has a spectacular aura about
Quote:
Originally Posted by aerosayan View Post
So how should we approach developing our code? Specifically, how do we determine when REAL*4 would not be enough and we need REAL*8?
I don't think there is a simple and straightforward answer to that. Estimating the effect of numerical precision on the results in general, or on further use of the results somewhere later down the line, requires careful evaluation of every algorithm in your code.
This topic is definitely full of traps, waiting to bite an unsuspecting developer in the ass.

Your question was originally about single precision in large commercial codes, with decades of history, huge development teams, and a fallback option to double precision. I made my case for single precision under that premise.

For developing your own solver, the answer is much easier. When in doubt, just use double precision. Your code will surely have enough other unique selling points; otherwise you would not develop your own code. No need to put in all the additional work just to get the advantages of running single precision on top of that.
aerosayan likes this.

Old   February 26, 2022, 12:50
Default
  #9
Senior Member
 
 
Paolo Lampitella
Join Date: Mar 2009
Location: Italy
Posts: 2,195
Blog Entries: 29
Rep Power: 39
sbaffini will become famous soon enough
Quote:
Originally Posted by aerosayan View Post
One year ago I asked if devs here develop their solvers to support single precision.
Paolo said that he only uses double precision because single precision is way too imprecise.
I don't want to be picky, but what I said there, in "Are you writing solvers for single or double precision?", is, roughly speaking, that I am not good enough to write a working single precision solver. I didn't actually try, but there are issues to carefully consider almost everywhere, and I don't really have time. Once you have them figured out, I think, you are just left with compiling the same code version with different flags. Single precision indeed has advantages if you have it (speed, memory, communication, and storage), so why not?
andy_ and aerosayan like this.

Old   February 26, 2022, 12:57
Default
  #10
Senior Member
 
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8
aerosayan is on a distinguished road
Quote:
Originally Posted by sbaffini View Post
I don't want to be picky, but what I said there, in "Are you writing solvers for single or double precision?", is, roughly speaking, that I am not good enough to write a working single precision solver. I didn't actually try, but there are issues to carefully consider almost everywhere, and I don't really have time. Once you have them figured out, I think, you are just left with compiling the same code version with different flags. Single precision indeed has advantages if you have it (speed, memory, communication, and storage), so why not?

Yes, some precision issues can be tricky. I just figured out yesterday how to generate extremely accurate meshes with single precision coordinates.
