|
December 18, 2008, 20:17 |
Will GPU's play a part in the future of CFD?
|
#1 |
Guest
Posts: n/a
|
Hi All,
I see that GPU's coupled with NVIDIA's CUDA are now being used to get solutions of fluid dynamical problems with particle based methods such as SPH. In fact, a colleague of mine is beginning a PhD in this field. From what I have seen, the efficiency with which solutions can be achieved is mind blowing.I think that this is something that cannot be ignored by the CFD community. However, I have not yet seen GPU's being applied to mesh based systems. Does anyone know whether testing for future of GPU's in standard CFD is ongoing in industry? Will we see the day when software developers such as ANSYS and Adapco will be taking advantage of GPU technology. I guess moving this way could result in alot of disgruntled customers especially those with money invested in large clusters etc. Anyway I am no expert on this and I'd like to other peoples opinions. Thanks |
|
December 19, 2008, 01:23 |
Re: Will GPU's play a part in the future of CFD?
|
#2 |
Guest
Posts: n/a
|
GPGPU computing (general-purpose GPU computing) is certainly a game changer. The potential speedup for SIMD-rich code is 10x to 100x, comparing current high-end GPUs against CPUs. Just as important as the speedup is the fact that such a GPU consumes merely one to two times as much electrical power as the CPU. GPGPUs are basically the reincarnation of the array and vector coprocessors of the '80s and '90s, but this time the broad consumer graphics market has driven their prices down to levels on par with those of CPUs. The future of HPC appears to be clusters of GPUs or similar massively multicore processors. Both NVIDIA and AMD are hurrying to deliver double-precision arithmetic on GPUs for the HPC market (the consumer graphics market is generally satisfied with single precision).
While hardware price is not a barrier to entry (you can put together a four-teraflop machine in a single computer case for less than $3000), the software framework currently is. NVIDIA has certainly been leading the way with CUDA. Wolfram Research has already demonstrated a version of Mathematica that achieves speedup in that range using CUDA and an NVIDIA card. AMD has its Brook+ compiler. However, reportedly, neither of these is easy for code developers to master. The OpenCL specification (initially drawn up by Apple) released ten days ago by the Khronos Group is a major step toward an open standard for vendor-agnostic GPGPU programming. However, it still appears to be too close to the metal for many developers to embrace. Michael Wolfe, a compiler engineer at the Portland Group, writing in the November issue of Linux Journal, discusses the feasibility of C/C++ and Fortran compilers that would automate most of the memory and process management that currently must be hand-coded in CUDA/Brook+ (and presumably in OpenCL), and that would generate parallelized code with the help of a few OpenMP-like compiler directives from the developer. Adding to the complexity of the parallelization effort, large-scale HPC will still need MPI to distribute computations among the nodes of a cluster, while using OpenCL or other approaches to accelerate each node's process on its GPU(s). So the current uncertainty for code developers is whether to jump in and radically rewrite their codes to take advantage of OpenCL (which is basically C with a lot of GPU management functions), or to wait for C++ or Fortran compilers that may require only OpenMP-style changes to their codes. Another uncertainty is what Intel will bring to the table. Intel will certainly not lie back and allow NVIDIA and AMD to steal the HPC market away from it.
However, Intel's likely response, in the form of its x86-based Larrabee GPU, seems to be a little late to market. Intel is certainly maintaining a presence in the OpenCL initiative. At any rate, the ascent of cheap and powerful SIMD/MIMD coprocessors marks a watershed for HPC codes. I have a feeling that codes whose developers are not nimble enough to take advantage of these massively parallel and power-efficient coprocessors will disappear from the market as their more nimble rivals produce turnaround times that leave them in the dust. I expect that many unmaintainable in-house legacy codes will disappear when they prove too difficult to port to GPUs, and as newer, better-written codes outperform them by ridiculous margins. |
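To make the "SIMD-rich" point concrete, here is the shape of loop that maps well to a GPU, sketched in Python (the function name and framing are mine, purely illustrative):

```python
def saxpy(a, x, y):
    # y <- a*x + y, elementwise: the canonical data-parallel kernel.
    # Every output element is independent of the others, which is
    # exactly the structure a CUDA/OpenCL kernel (or a directive-based
    # compiler of the kind Wolfe describes) can map one thread per element.
    return [a * xi + yi for xi, yi in zip(x, y)]
```

A directive-based compiler of the sort discussed above would take the equivalent C or Fortran loop plus a single OpenMP-style hint and generate the GPU memory management and kernel launch code automatically.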
|
December 19, 2008, 01:29 |
Re: Will GPU's play a part in the future of CFD?
|
#3 |
Guest
Posts: n/a
|
And, of course, Microsoft is looking to grab some share of the GPGPU computing market with its upcoming DirectX 11.
|
|
December 20, 2008, 10:53 |
Re: Will GPU's play a part in the future of CFD?
|
#4 |
Guest
Posts: n/a
|
Yes, the commercial CFD software industry is looking into the use of GPUs as a computational resource. Some of the work that has already been done is quite impressive. I've seen a real-time simulation of smoke done with this technology for a movie, and it was quite impressive, though the simulation was relatively low-fidelity. (This is not a slam, just a comment on the numerical techniques applied at the time.) One key to the use of these products is the double-precision GPUs that are just now coming online, and whether or not customers will buy them.
|
|
December 21, 2008, 03:34 |
Re: Will GPU's play a part in the future of CFD?
|
#5 |
Guest
Posts: n/a
|
|
December 21, 2008, 09:41 |
Re: Will GPU's play a part in the future of CFD?
|
#6 |
Guest
Posts: n/a
|
Thanks Ananda Himansu and John Chawner for your interesting opinions. While GPGPU is a wonderful advance, I have seen that CUDA seems to use 32-bit integers for addressing. I assume it is only a matter of time before 64-bit addressing is used?
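The practical consequence of 32-bit addressing is easy to quantify (a trivial sketch of my own, not from any vendor documentation):

```python
def max_addressable_gib(address_bits):
    # A pointer with n bits can distinguish 2**n byte addresses;
    # divide by 2**30 to express the limit in GiB.
    return 2 ** address_bits / 2 ** 30

# 32-bit pointers cap a single address space at 4 GiB, already
# restrictive for large CFD meshes; 64-bit addressing removes the
# limit for any foreseeable memory size.
```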
|
|
December 21, 2008, 19:58 |
Re: Will GPU's play a part in the future of CFD?
|
#7 |
Guest
Posts: n/a
|
I have not seen anything about 64-bit addressing, but then I have merely been skimming the news about this (and dreaming), and have had no opportunity to delve into it.
|
|
December 21, 2008, 22:50 |
Re: Will GPU's play a part in the future of CFD?
|
#8 |
Guest
Posts: n/a
|
For mesh-based methods, the solution really needs to be smooth so that high-order methods can be used. Low-order unstructured computations lack sufficient regularity for a GPU to outperform a CPU. Implicit methods are even worse on GPUs, because constructing a preconditioner is complicated (it involves irregular memory access), and even when high-order elements are used, the best preconditioners are still built from very sparse matrices. The hardware really needs to change before implicit methods on a GPU can beat a good algorithm on a CPU.
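The irregular-access problem can be seen in a minimal sparse matrix-vector product in CSR format (a Python sketch of the standard algorithm; the variable names are mine):

```python
def csr_matvec(vals, col_idx, row_ptr, x):
    # y = A*x with A stored in compressed sparse row (CSR) format:
    # vals holds the nonzeros, col_idx their column numbers, and
    # row_ptr[i]:row_ptr[i+1] delimits row i's entries.
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            # x[col_idx[k]] is an indirect, data-dependent load.
            y[i] += vals[k] * x[col_idx[k]]
    return y
```

The indirect load `x[col_idx[k]]` depends on the mesh connectivity, so neighboring GPU threads read scattered addresses instead of a contiguous stream, defeating the coalesced memory access that GPU performance depends on.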
|
|
December 21, 2008, 23:01 |
Re: Will GPU's play a part in the future of CFD?
|
#9 |
Guest
Posts: n/a
|
|
December 22, 2008, 16:47 |
Re: Will GPU's play a part in the future of CFD?
|
#10 |
Guest
Posts: n/a
|
Nobody is sustaining 5 GFlops on sparse linear algebra with one CPU. GPU performance for sparse linear algebra is currently pretty poor, especially in the preconditioner. The Cell is more amenable to this kind of work, but it takes a significant amount of effort, and important preconditioners (such as BoomerAMG and ML) have not, to my knowledge, been ported to Cell or GPU. Here is a nice look at the current state of sparse mat-vec: http://crd.lbl.gov/~oliker/papers/SIAMPP08-oliker.pdf
I don't know what ANSYS is doing, but the numbers he throws out in the video are ridiculous, so they certainly don't refer to sparse linear algebra. |
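A back-of-envelope estimate shows why sustained flops are so low here. Assuming CSR storage with 8-byte values and 4-byte column indices, and roughly 10 GB/s of memory bandwidth for a 2008-era CPU socket (these numbers are my assumptions, not taken from the linked slides):

```python
def csr_flops_per_byte(bytes_value=8, bytes_index=4):
    # Each nonzero costs one multiply and one add (2 flops) while
    # streaming one matrix value and one column index; the indirect
    # read of x is ignored, so this is an optimistic upper bound on
    # arithmetic intensity.
    return 2.0 / (bytes_value + bytes_index)

def streaming_bound_gflops(bandwidth_gb_s=10.0):
    # A memory-bound kernel cannot exceed bandwidth times intensity.
    return bandwidth_gb_s * csr_flops_per_byte()
```

The bound comes out under 2 GFlop/s, consistent with the observation that nobody sustains 5 GFlops on one CPU: sparse mat-vec is limited by memory bandwidth, not by peak arithmetic.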
|
December 22, 2008, 19:20 |
Re: Will GPU's play a part in the future of CFD?
|
#11 |
Guest
Posts: n/a
|
Several months ago I ran across a link to a company named Accelereyes that is developing software to replace/supplement routines in Matlab. They have a product named Jacket in beta. The link to the company is http://www.accelereyes.com/ . You can download a User Guide for Jacket, and it looks like you can also download a test version of the program.
|
|
December 23, 2008, 11:01 |
Re: Will GPU's play a part in the future of CFD?
|
#12 |
Guest
Posts: n/a
|
Both NVIDIA and AMD/ATI are working on this, but I think AMD/ATI looks more interesting, since they are porting AMD's math libraries (LAPACK and so on) to the GPU as ACML-GPU. I think you can get a free copy if you email them.
The problem is still the amount of memory available for large simulations, but the technology looks promising. |
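The memory concern is easy to quantify. Assuming a cell-centered solver storing 7 double-precision unknowns per cell (my assumption for illustration; real solvers need considerably more once connectivity and workspace are counted):

```python
def state_memory_gb(n_cells, vars_per_cell=7, bytes_per_value=8):
    # Size of just the solution vector; excludes mesh connectivity,
    # Jacobians, and linear-solver workspace.
    return n_cells * vars_per_cell * bytes_per_value / 1e9
```

A 20-million-cell case already needs over a gigabyte for the state vector alone, at or beyond the onboard memory of 2008-era GPUs, so large cases must be partitioned across cards or shuttled over the PCIe bus.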
|
December 23, 2008, 17:34 |
Re: Will GPU's play a part in the future of CFD?
|
#13 |
Guest
Posts: n/a
|
I wrote lattice-Boltzmann code using Brook about three years ago. In my experience the speedup was around 3x compared to the CPU. The GPU was an NVIDIA GeForce (a mid-range model) and the CPU was an AMD Athlon 64 3400+. At that time the mesh had to be split into 64K packets and sent to the GPU; I tried bigger packets, but the result was a runtime error. I measured the single-packet speedup at around 50x, so the slow part was the 64K splitting and the transfer between CPU and GPU. Another interesting thing: I could greatly optimize the code by computing some parts twice or more on the GPU rather than transferring the computed data back through the CPU to board memory. In fact, if I had not done that, the CPU would have been as fast as the GPU. This is my experience using Brook with the lattice-Boltzmann method (LBM). I know it is not up to date, and LBM is quite well parallelizable anyway, but I think the weak point was the data transfer between GPU and CPU, and that remains true today.
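The numbers in this post fit a simple transfer-overhead model (my own sketch, using the 50x per-packet and 3x overall figures reported above):

```python
def overall_speedup(kernel_speedup, transfer_fraction):
    # Overall speedup when the kernel runs kernel_speedup times faster
    # but CPU<->GPU transfer adds transfer_fraction of the original
    # CPU-only runtime on top of the accelerated kernel time.
    return 1.0 / (1.0 / kernel_speedup + transfer_fraction)

def implied_transfer_fraction(kernel_speedup, observed_speedup):
    # Invert the model: how much transfer time explains the observation?
    return 1.0 / observed_speedup - 1.0 / kernel_speedup
```

Plugging in 50x and 3x gives a transfer fraction of about 0.31: transfers were eating roughly a third of the original CPU runtime, which is why recomputing values on the GPU beat shipping them back and forth over the bus.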
|
|
December 29, 2008, 16:36 |
Re: Will GPU's play a part in the future of CFD?
|
#14 |
Guest
Posts: n/a
|
http://www.linux.com/feature/148339
http://www.sgi.com/company_info/news...er/opengl.html Silicon Graphics, Inc. released OpenGL to the open-source Linux community for the development of graphics applications. As a leader in visual computing, SGI has contributed more open source to the Linux world than any other HPC manufacturer, and continues to focus on the HPC areas of computing, data management, and visualization. I think one of the limiting factors of a GPU-based solution is that you can't run many GPUs in parallel...yet. Another problem is that as CFD/CAE technology continues to develop, the size and detail of the models increase, so the resulting files are also getting larger and larger. To keep up, scaling clusters, memory, and high-performance storage is more likely to be the first way to enable fast, accurate results. I think there will be a way to leverage those standard components through software to deliver the high-resolution images/result files of CFD computing. SGI is exploring this notion today with our new VUE technology. Today we can scale memory up to 127 TB (yes, that is a T), so that in-processor computations can be done on huge in-memory files/models. This eliminates the bottleneck of network and storage I/O. Does anyone know of someone who is 'clustering' GPUs for handling the large images/data required in some CFD computing? Curious. Shelly |
|
January 21, 2009, 14:47 |
Re: Will GPU's play a part in the future of CFD?
|
#15 |
Guest
Posts: n/a
|
Yes, a supercomputer in Japan called "Tsubame" is now the 29th-fastest supercomputer in the world thanks to the addition of GPU clusters.
http://www.goodgearguide.com.au/arti...rcomputer?pp=1 I'm not sure if they're doing any CFD-related work, but as for transfer between GPUs in the Tesla workstations, the rates are apparently very good. |
|
January 22, 2009, 06:04 |
Re: Will GPU's play a part in the future of CFD?
|
#16 |
Guest
Posts: n/a
|
Of course! Look up ACML-GPU.
|
|
|
|