128 core cluster E5-26xx V4 processor choice for Ansys FLUENT
July 11, 2017, 16:43
#21
New Member
Join Date: May 2013
Posts: 26
Rep Power: 13
Comparison benchmarking of Epyc vs. Skylake SP has started:
http://www.anandtech.com/show/11544/...-the-decade/12
July 11, 2017, 17:28
#22
Senior Member
Join Date: Mar 2009
Location: Austin, TX
Posts: 160
Rep Power: 18
Found a Euler3d benchmark for Skylake SP:
https://hothardware.com/reviews/inte...-review?page=6
Still nothing for EPYC that I can see.
July 11, 2017, 17:40
#23
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,754
Rep Power: 66
July 11, 2017, 17:41
#24
Senior Member
Join Date: Mar 2009
Location: Austin, TX
Posts: 160
Rep Power: 18
The page I linked to has a Euler3d benchmark, which is a CFD benchmark.
July 11, 2017, 17:51
#25
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
July 11, 2017, 18:01
#26
New Member
Join Date: May 2013
Posts: 26
Rep Power: 13
Just started a new thread for benchmark results of Epyc and Xeon Skylake SP:
Epyc vs Xeon Skylake SP
July 11, 2017, 18:05
#27
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,754
Rep Power: 66
Because bandwidth has a clear influence. And as hpvd said earlier: no matter how many cores, always eight DDR4 memory channels per node (quad-channel DDR4 per socket). Once you make that choice, there are fewer than a dozen options, and what you should build quickly boils down to what you can afford. It's quite easy to build a high-end system for a CFD application. In other applications (traditional HPC) that are not slowed by memory bandwidth, you can end up with all sorts of headaches. If you are not convinced, feel free to ignore me.
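A back-of-envelope way to see why bandwidth dominates: estimate the iteration rate a node can sustain if the solver has to stream the whole case through memory every iteration. This is a minimal Python sketch; the bytes-per-cell figure and the achievable-fraction-of-peak factor are rough illustrative assumptions, not measured values.

Code:
# Rough bandwidth-bound throughput estimate for a memory-bound CFD solver.
# Illustrative assumptions: a finite-volume iteration streams on the order
# of ~1 KB per cell, and real codes reach only a fraction of theoretical
# DDR4 bandwidth (a STREAM-like efficiency factor).

def peak_bandwidth_gb_s(channels: int, mt_per_s: float) -> float:
    """Theoretical DDR4 bandwidth per node: channels x transfer rate x 8 bytes."""
    return channels * mt_per_s * 8 / 1000.0  # GB/s

def iterations_per_second(cells: int, bw_gb_s: float,
                          bytes_per_cell: float = 1000.0,
                          efficiency: float = 0.6) -> float:
    """Iterations/s if each iteration must stream bytes_per_cell per cell."""
    usable = bw_gb_s * efficiency * 1e9  # bytes/s actually achievable
    return usable / (cells * bytes_per_cell)

# Dual E5-2667 V4 node: 2 sockets x 4 channels of DDR4-2400.
bw = peak_bandwidth_gb_s(channels=8, mt_per_s=2400)  # 153.6 GB/s peak
print(f"peak bandwidth: {bw:.1f} GB/s")
print(f"~{iterations_per_second(2_000_000, bw):.1f} it/s on a 2M-cell case")

The point of the sketch: adding cores changes none of these numbers; only adding memory channels (or nodes) does.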
July 17, 2017, 03:01
#28
New Member
Ramón
Join Date: Mar 2016
Location: The Netherlands
Posts: 11
Rep Power: 10
Thank you for the direct links to all the benchmarks! However, I still eagerly await a direct benchmark comparison with actual CFD software to quantify the differences.

There is another complication: the day the benchmarks came out was exactly the day my director signed off on the purchase order for our new system. So a 128 core system with the E5-2667 V4 is coming our way this summer. When it is ready, I will need to run some of the official Ansys benchmarks to quantify the speed-up for our hardware vendor. I can try to post some of the results here as well, if you are interested. Thanks for the support!
October 27, 2017, 12:18
#29
Member
Join Date: May 2009
Posts: 54
Rep Power: 17
Thank you for an insightful discussion on cluster selection for CFD.

Does anyone have any feedback on the use of the ARM architecture? The idea of 48-core nodes seems to go against many of the points brought up here in terms of performance. There are some OpenFOAM benchmarks briefly summarized here: https://developer.arm.com/-/media/de..._CFDvFinal.pdf
January 14, 2018, 03:37
#30
Member
Ivan
Join Date: Oct 2017
Location: 3rd planet
Posts: 34
Rep Power: 9
January 19, 2018, 04:53
#31
New Member
Ramón
Join Date: Mar 2016
Location: The Netherlands
Posts: 11
Rep Power: 10
Dear Noco, the following specs apply to our cluster:
8x slave nodes:
- Dell PowerEdge R630
- 120 GB SSD
- 2x Intel Xeon E5-2667 V4
- 8x 16 GB DIMM 2400 MHz
- Mellanox ConnectX-3 dual-port VPI FDR

Head node:
- An old EDX server we had lying around
- 800 GB SSD
- 2x Intel Xeon E5645
- 12x 8 GB DIMM 1333 MHz

Both head and slave nodes run Windows Server with Windows HPC as the cluster software.

These are the performance figures, scaled to the ANSYS "solver rating", in which 1 solve = 25 iterations and the solver rating is the number of single solves completed in 24 hours. Performed in Ansys FLUENT 18.2.

Aircraft_2million cells benchmark:
- 1 node (16 cores): 2168
- 2 nodes (32 cores): 3793
- 4 nodes (64 cores): 6545
- 6 nodes (96 cores): 9521
- 8 nodes (128 cores): 11811

Sedan_4million cells benchmark:
- 1 node (16 cores): 1557
- 2 nodes (32 cores): 2727
- 4 nodes (64 cores): 5339
- 6 nodes (96 cores): 8028
- 8 nodes (128 cores): 9201

Aircraft_14million cells benchmark:
- 1 node (16 cores): 252
- 2 nodes (32 cores): 477
- 4 nodes (64 cores): 851
- 6 nodes (96 cores): 1228
- 8 nodes (128 cores): 1621

Exhaust system_33million cells benchmark:
- 1 node (16 cores): 83
- 2 nodes (32 cores): 165
- 4 nodes (64 cores): 311
- 6 nodes (96 cores): 459
- 8 nodes (128 cores): 593

*Note that these benchmarks were conducted a few months ago and do not include the recent updates that fix the Intel security flaws. I will run benchmarks after those as well. However, as many people here will advise you: do not buy this generation of processors anymore!

Last edited by F1aerofan; January 19, 2018 at 08:40.
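For anyone who wants to turn these solver ratings back into wall-clock time or check the scaling, here is a minimal Python sketch using only the definitions quoted above (1 solve = 25 iterations, rating = solves per 24 hours); the numbers are the Aircraft_14million results from this post.

Code:
# Convert an ANSYS "solver rating" (solves per 24 h, 1 solve = 25 iterations)
# into seconds per iteration, and compute parallel efficiency vs. one node.

SECONDS_PER_DAY = 24 * 3600

def seconds_per_iteration(rating: float) -> float:
    """Wall time of one solver iteration implied by a solver rating."""
    return SECONDS_PER_DAY / (rating * 25)

def efficiency(ratings: dict) -> dict:
    """Parallel efficiency of each node count relative to the smallest one."""
    base_nodes = min(ratings)
    base_rating = ratings[base_nodes]
    return {n: (r / base_rating) / (n / base_nodes)
            for n, r in ratings.items()}

# Aircraft_14million: {nodes: solver rating}
aircraft_14m = {1: 252, 2: 477, 4: 851, 6: 1228, 8: 1621}

print(f"1 node: {seconds_per_iteration(aircraft_14m[1]):.1f} s per iteration")
for n, e in sorted(efficiency(aircraft_14m).items()):
    print(f"{n} node(s): {e:.0%} parallel efficiency")

On these figures the 8-node run retains roughly 80% parallel efficiency on the 14M-cell case.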