February 19, 2020, 07:45 |
Supercomputing & accelerators
#1
Member
Join Date: Mar 2009
Posts: 36
Rep Power: 17
I don't want to put this in the HW section as it's too general and not tied to specific requirements.
I see articles like https://www.techpowerup.com/263990/a...304-epyc-cores and https://www.techpowerup.com/263976/u...-supercomputer and wonder just how useful accelerators really are for memory-intensive FV (and, I suppose, FE) methods. If anywhere has people with experience of such machines (who are willing to hint at whether it really works or not), it's here. Accelerator manufacturers *cough Nvidia cough* will show a canned benchmark of a specific subsection of a niche problem to demonstrate improvements - but how does that translate to the real world? Are groups doing themselves a disservice by devoting significant amounts (in both cost and power budgets) of their CPU/GPU mix to GPUs? Should I make a distinction between GPU and FPGA here? Probably. Anyone have any input?
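For what it's worth, a back-of-the-envelope roofline estimate for a typical FV kernel (a sparse matrix-vector product) is what makes me sceptical. The sketch below uses assumed hardware numbers purely for illustration, not measurements from any specific machine - swap in your own.

Code:
// Back-of-the-envelope roofline estimate for a bandwidth-bound FV kernel
// (e.g. a CSR sparse matrix-vector product). The hardware numbers are
// illustrative assumptions, not measurements -- substitute your own.
#include <algorithm>
#include <cstdio>

int main() {
    // Assumed FP64 peaks: dual-socket server CPU vs. data-centre GPU.
    const double cpu_bw_gbs   = 350.0;   // GB/s, assumed aggregate DRAM bandwidth
    const double cpu_flops_gf = 2000.0;  // GFLOP/s, assumed FP64 peak
    const double gpu_bw_gbs   = 900.0;   // GB/s, assumed HBM bandwidth
    const double gpu_flops_gf = 7000.0;  // GFLOP/s, assumed FP64 peak

    // A CSR SpMV does ~2 flops (multiply + add) per non-zero while reading
    // roughly one 8-byte value, one 4-byte index and a vector entry:
    // call it ~16 bytes moved per 2 flops.
    const double intensity = 2.0 / 16.0;  // FLOP per byte

    const double cpu_attainable = std::min(cpu_flops_gf, intensity * cpu_bw_gbs);
    const double gpu_attainable = std::min(gpu_flops_gf, intensity * gpu_bw_gbs);

    std::printf("CPU attainable: %6.1f GFLOP/s (%.1f%% of peak)\n",
                cpu_attainable, 100.0 * cpu_attainable / cpu_flops_gf);
    std::printf("GPU attainable: %6.1f GFLOP/s (%.1f%% of peak)\n",
                gpu_attainable, 100.0 * gpu_attainable / gpu_flops_gf);
    std::printf("Realistic GPU/CPU speedup: %.1fx (peak-FLOPS ratio: %.1fx)\n",
                gpu_attainable / cpu_attainable, gpu_flops_gf / cpu_flops_gf);
    return 0;
}

With those assumed numbers both architectures sit at a few percent of their FLOP peak, and the realistic speedup is roughly the memory-bandwidth ratio (about 2.6x here) rather than the peak-FLOPS ratio that tends to appear on the marketing slides.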
February 19, 2020, 12:14
#2
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,761
Rep Power: 66
The article is misleading because it portrays Shasta as being used for weather prediction and CFD, which it isn't. Shasta is a general-purpose supercomputer.
Well, we are CFD'ers here, so we don't see the benefit of GPU acceleration currently. But there are tons of fields that do, and it makes sense for them.

Last edited by LuckyTran; February 21, 2020 at 11:39.
February 19, 2020, 12:32
#3
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,896
Rep Power: 73
GPUs are gaining a lot of interest in the CFD field and interesting results are appearing, for example https://insidehpc.com/2019/11/tackli...supercomputer/
I cannot say much more about their future potential; stay tuned to the news appearing on the Internet.
February 20, 2020, 13:17
#4
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
GPU acceleration definitely has its place in CFD-HPC; there can be no doubt about it. But when we take a look at the example linked by FMDenaro, we can already make out some limitations.

The code was developed with GPU acceleration in mind right from the start, and the geometries they are dealing with are rather simple, which further facilitates their parallelization approach.

When we look at the other end of the spectrum (commercial, general-purpose CFD solvers), things are not looking so bright any more. Everyone has to have GPU acceleration; it is almost a mandatory feature for marketing purposes, from back in the days when GPU acceleration was hyped by everyone (cough cough, machine learning). The quality of implementation ranges from "yes, we do have GPU acceleration" to "actually works pretty well". For many codes, GPU acceleration only pays off because the licensing scheme has been geared towards it. If there were no money to be saved on license costs, nobody would think about using GPU acceleration for some of these codes. Just throw another CPU compute node at it instead and call it a day. You can even use ALL the models, not just those that already got the GPU treatment.

But there are exceptions to the disappointing reality that is GPU acceleration in most commercial solvers. I recently did some tests with Abaqus/Standard (an FEA solver), and GPU acceleration actually works as advertised: around a 2.5x speedup compared to using only 8 CPU cores, with no model size restriction tied to GPU memory, on an ageing Nvidia Quadro K6000. Whether that is an indicator of good GPU acceleration, or of poor CPU performance to begin with, is up for debate.

One of the definitions of HPC is "computing at a bottleneck". It is not entirely uncommon to see claims of xx times speedup thanks to GPU acceleration. Without knowing how optimized the CPU implementation was, a statement like that is rather meaningless.
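One practical way to act on that last point (a sketch, not tied to any particular solver): before trusting an "xx times faster with GPUs" number, check what fraction of the machine's memory bandwidth the CPU baseline actually reaches on a bandwidth-bound kernel. A STREAM-triad-style loop is enough for that check; compare the GB/s it reports against the platform's documented DRAM bandwidth.

Code:
// STREAM-triad-style sanity check of a CPU baseline (sketch).
// Build e.g. with: g++ -O3 -fopenmp -march=native triad.cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 1 << 25;                // ~256 MB per array (FP64)
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);
    const double scalar = 3.0;

    double best_s = 1e30;
    for (int rep = 0; rep < 10; ++rep) {          // keep the best of several runs
        const auto t0 = std::chrono::steady_clock::now();
        #pragma omp parallel for
        for (std::size_t i = 0; i < n; ++i)
            c[i] = a[i] + scalar * b[i];          // 2 loads + 1 store per element
        const auto t1 = std::chrono::steady_clock::now();
        best_s = std::min(best_s, std::chrono::duration<double>(t1 - t0).count());
    }

    const double bytes = 3.0 * n * sizeof(double);
    std::printf("achieved bandwidth: %.1f GB/s\n", bytes / best_s / 1e9);
    // If the solver's CPU runs sit far below this (or below the documented
    // DRAM bandwidth), a large quoted GPU speedup mostly measures how
    // unoptimized the CPU baseline was.
    return 0;
}

A CPU baseline that lands nowhere near the spec-sheet bandwidth on a loop like this is leaving an easy CPU speedup on the table before any GPU enters the picture.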
February 20, 2020, 17:29
#5
Member
EM
Join Date: Sep 2019
Posts: 59
Rep Power: 7
I remember reading 3 or 4 years ago how the UK's Met Office was porting their codes to Nvidia GPUs; I have not been able to locate the article again.
GPU programming is difficult: programmers are expected to minimize memory transfers between on- and off-chip memory and to maximize the use of local shared memory and registers. Nvidia provides the best TFLOPS and support, but AMD the most TFLOPS for the money. The following JFM reference uses two AMD GPUs as the main computing engines for a spectral DNS: https://doi.org/10.1017/jfm.2018.811
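To make that concrete, here is a minimal CUDA sketch (not taken from the JFM code or the Met Office port): keep the fields resident in device memory across time steps, and stage values that neighbouring threads reuse in on-chip shared memory. A 1-D three-point stencil shows the pattern.

Code:
// Minimal CUDA sketch: 1-D 3-point stencil with shared-memory tiling.
// Each block stages its tile (plus halo cells) in on-chip shared memory
// so neighbouring threads don't re-read the same values from DRAM.
#include <cuda_runtime.h>

__global__ void stencil3(const double* in, double* out, int n) {
    extern __shared__ double tile[];              // blockDim.x + 2 halo cells
    const int gid = blockIdx.x * blockDim.x + threadIdx.x;
    const int lid = threadIdx.x + 1;

    if (gid < n) tile[lid] = in[gid];             // each thread loads one cell
    if (threadIdx.x == 0)                         // left halo cell
        tile[0] = (gid > 0) ? in[gid - 1] : 0.0;
    if (threadIdx.x == blockDim.x - 1)            // right halo cell
        tile[lid + 1] = (gid + 1 < n) ? in[gid + 1] : 0.0;
    __syncthreads();

    if (gid > 0 && gid < n - 1)                   // simple smoothing stencil
        out[gid] = 0.25 * tile[lid - 1] + 0.5 * tile[lid] + 0.25 * tile[lid + 1];
}

int main() {
    const int n = 1 << 20, block = 256;
    double *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in,  n * sizeof(double));       // fields live in device memory;
    cudaMalloc(&d_out, n * sizeof(double));       // host transfers only for I/O
    // ... initialize d_in (cudaMemcpy or an init kernel), then run many steps:
    const size_t shmem = (block + 2) * sizeof(double);
    stencil3<<<(n + block - 1) / block, block, shmem>>>(d_in, d_out, n);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

The same pattern carries over to real FV/FD kernels; the main win is that the arrays stay on the device for the whole time loop, with host-device transfers reserved for I/O.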
Tags |
apu, gpu, hpc, supercomputing |